101.
This paper presents a scheme for classifying faults on double-circuit parallel transmission lines using a combination of the discrete wavelet transform and a support vector machine (SVM). Only one post-fault cycle of the phase currents is used to predict the fault type. Two features are extracted from each phase current using the discrete wavelet transform, giving a total of 12 features for the six phase currents. Training data were collected, and an SVM was trained to build the fault classification unit, which was then tested on different fault states. The power system simulation was conducted in MATLAB/Simulink. The proposed technique accounts for the mutual coupling between the parallel transmission lines and the randomness of faults on the line, considering time of occurrence, fault location, fault type, fault resistance, and loading conditions. The results show that the proposed technique classifies all faults on the parallel transmission lines correctly. © 2015 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.
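The feature-extraction pipeline above (one post-fault cycle per phase, a DWT, two features per phase current, a 12-element vector) can be sketched as below. The Haar wavelet and the choice of detail-coefficient energy and standard deviation as the two features are illustrative assumptions, not the paper's stated choices, and the SVM stage is omitted:

```python
import math

def haar_dwt(signal):
    # One-level Haar DWT: approximation and detail coefficients.
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_features(current):
    # Two features per phase current: energy and standard deviation of the
    # detail coefficients (hypothetical statistics chosen for illustration).
    _, detail = haar_dwt(current)
    energy = sum(d * d for d in detail)
    mean = sum(detail) / len(detail)
    std = math.sqrt(sum((d - mean) ** 2 for d in detail) / len(detail))
    return [energy, std]

def build_feature_vector(phase_currents):
    # Six phase currents (two parallel circuits x three phases)
    # -> 12-element vector fed to the SVM classifier.
    vec = []
    for current in phase_currents:
        vec.extend(wavelet_features(current))
    return vec

# One post-fault cycle per phase, synthetic example data:
currents = [[math.sin(2 * math.pi * k / 32 + p) for k in range(32)] for p in range(6)]
features = build_feature_vector(currents)
print(len(features))  # 12
```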
102.
The continuous high demand for water resources for agricultural use in Jordan is leading to a water crisis. A possible partial solution may be to import food that requires large amounts of water to grow, instead of cultivating high-water-consuming crops. Crops such as banana and citrus cause a huge virtual water loss, which could be reduced by cultivating less water-demanding crops. This paper focuses on analyzing the economic value of cultivating fruit trees from a virtual water perspective. The virtual water calculations in this study depend on the average rainfall, the water quota, and the crops' water requirements (CWR). The gross-profit-to-water-use ratio shows that banana has the lowest value, 0.085 JD/m3, while lemon has the highest, 1.65 JD/m3. The calculations show that the average embedded water in fruits varies from about 470 m3/ton for grapes to about 2,500 m3/ton for dates. Banana and citrus plantations consume about 21 and 71 million cubic meters (MCM) annually, respectively, which represents about 85% of the total water consumption of fruit tree plantations. The estimated virtual water flow embedded in fruits shows that Jordan imports about 77 MCM per year but exports about 29 MCM per year. The results were analyzed from an integrated water resources management (IWRM) perspective. The analysis shows that one way to recover some of the water costs involved in, e.g., banana production would be to increase the fertilizer cost by about 10%. This would double the water cost and increase the banana production cost by about 6.8%. This alternative could be a way to better manage the huge virtual water losses involved in banana production in the Jordan Valley.
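The two ratios the abstract relies on, gross profit per cubic meter of water and embedded (virtual) water per ton of produce, are simple quotients. The sketch below reproduces the reported banana (0.085 JD/m3) and grape (about 470 m3/ton) figures; the absolute inputs are hypothetical round numbers chosen only to match those ratios:

```python
def profit_to_water_ratio(gross_profit_jd, water_use_m3):
    # JD of gross profit earned per cubic meter of irrigation water.
    return gross_profit_jd / water_use_m3

def virtual_water_content(water_use_m3, yield_tons):
    # Embedded (virtual) water per ton of produce, in m3/ton.
    return water_use_m3 / yield_tons

# Banana: lowest reported ratio, 0.085 JD/m3 (inputs are illustrative).
print(profit_to_water_ratio(850, 10_000))   # 0.085
# Grapes: about 470 m3/ton embedded water (inputs are illustrative).
print(virtual_water_content(4_700, 10.0))   # 470.0
```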
103.
This paper examines the temporal transferability of the zonal accident prediction models by using appropriate evaluation measures of predictive performance to assess whether the relationship between the dependent and independent variables holds reasonably well across time. The two temporal contexts are the years 1996 and 2001, with updated 1996 models being used to predict 2001 accidents in each traffic zone of the City of Toronto. The paper examines alternative updating methods for temporal transfer by imagining that only a sample of 2001 data is available. The sensitivity of the performance of the updated models to the 2001 sample size is explored. The updating procedures examined include the Bayesian updating approach and the application of calibration factors to the 1996 models. Models calibrated for the 2001 samples were also explored, but were found to be inadequate. The results show that the models are not transferable in a strict statistical sense. However, relative measures of transferability indicate that the transferred models yield useful information in the application context. Also, it is concluded that the updated accident models using the calibration factors produce better results for predicting the number of accidents in the year 2001 than using the Bayesian approach.
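The calibration-factor updating the paper favors amounts to rescaling the 1996 model's predictions by the ratio of observed to predicted accident totals in the available 2001 sample. A minimal sketch, with hypothetical zone counts:

```python
def calibration_factor(observed, predicted):
    # Ratio of total observed accidents in the new-year sample to the
    # total predicted for those same zones by the old (1996) model.
    return sum(observed) / sum(predicted)

def transfer_predict(base_predictions, factor):
    # Apply the factor to the old model's outputs for any zone.
    return [factor * p for p in base_predictions]

# Hypothetical 2001 sample of four zones:
obs_2001 = [12, 7, 20, 5]
pred_1996_model = [10, 8, 16, 6]
c = calibration_factor(obs_2001, pred_1996_model)  # 44 / 40 = 1.1
updated = transfer_predict([9, 14], c)             # rescaled predictions
print(c, updated)
```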
104.
Reconfigurable architectures are of great interest to system designers for improving a system's operation and efficiency. In this paper, we propose an adaptive utility-interactive photovoltaic (PV) system based on a novel Flexible Switch Matrix topology. The proposed system maximizes the generated power in real time in response to operational conditions such as shading, soiling, mismatches, and module failure, among others. It strikes a compromise in the utilization of power-conditioning equipment to maximize energy capture and system efficiency. Simulation results demonstrate an average 13% improvement in efficiency compared with the central-inverter topology. A prototype system has been designed and tested; the experimental results validate the proposed topology and its benefits for a wide range of applications.
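The reconfiguration idea can be illustrated with a toy model in which each series string is current-limited by its most shaded module; exhaustively trying module-to-string assignments and keeping the highest-power arrangement stands in for the paper's real-time switch-matrix control. The linear power model and the module currents below are assumptions for illustration only:

```python
from itertools import permutations

def string_power(module_currents):
    # Series string: current limited by the weakest (e.g. shaded) module.
    # Simplified linear model; real PV strings follow nonlinear I-V curves.
    return min(module_currents) * len(module_currents)

def best_configuration(modules, per_string):
    # Try every assignment of modules to strings and keep the arrangement
    # that maximizes total array power (brute force, illustrative only).
    best_strings, best_power = None, -1.0
    for order in permutations(modules):
        strings = [order[i:i + per_string] for i in range(0, len(order), per_string)]
        power = sum(string_power(s) for s in strings)
        if power > best_power:
            best_strings, best_power = strings, power
    return best_strings, best_power

modules = [5.0, 4.0, 2.0, 3.0]   # per-module currents under partial shading
config, power = best_configuration(modules, per_string=2)
print(power)  # grouping similarly shaded modules together yields 12.0
```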
105.
106.
Modern systems are enormously complex; many applications today comprise millions of lines of code, make extensive use of software frameworks, and run on complex, multi-tiered run-time systems. Understanding the performance of these applications is challenging because it depends on the interactions between many software and hardware components. This paper describes and evaluates an interactive and iterative methodology, temporal vertical profiling, for understanding the performance of applications. There are two key insights behind temporal vertical profiling. First, we need to collect and reason across information from multiple layers of the system before we can understand an application's performance. Second, application performance changes over time, so we must consider the time-varying behavior of the application instead of aggregate statistics. We developed temporal vertical profiling from our own experience of analyzing performance anomalies and have found it very helpful for methodically exploring the space of hardware and software components. By representing an application's behavior as a set of metrics, where each metric is a time series, temporal vertical profiling provides a way to reason about performance across system layers, regardless of their level of abstraction and independent of their semantics. It provides a methodology for exploring a large space of metrics, hundreds even for small benchmarks, in a systematic way. Copyright © 2010 John Wiley & Sons, Ltd.
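The core representation, every metric as a time series over the same sampling intervals, means metrics from different layers can be compared directly, for instance by ranking which metric's time series most closely tracks an anomalous one. A minimal sketch with made-up sample data:

```python
def pearson(x, y):
    # Pearson correlation between two equal-length time series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Each metric is a time series sampled over the same intervals, whatever
# layer it comes from (hardware counter, runtime, application), so they
# can be reasoned about uniformly. Values below are illustrative.
metrics = {
    "cycles":       [100, 120, 400, 110, 105],
    "cache_misses": [10, 12, 55, 11, 10],
    "gc_time_ms":   [1, 1, 1, 2, 1],
}
anomaly = "cycles"  # the metric showing the performance anomaly
ranked = sorted((m for m in metrics if m != anomaly),
                key=lambda m: -abs(pearson(metrics[anomaly], metrics[m])))
print(ranked[0])  # metric whose behavior best tracks the slowdown
```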
107.
The commercialization of the fifth-generation (5G) wireless network has begun. Massive numbers of devices are being integrated into 5G-enabled wireless sensor networks (5G WSNs) to deliver a variety of valuable services to network users. However, there are rising fears that 5G WSNs will expose sensitive user data to new security vulnerabilities. For secure end-to-end communication, key agreement and user authentication schemes have been proposed. However, when billions of devices are networked to collect and analyze complex user data, more stringent security approaches are required. Data integrity, non-repudiation, and authentication necessitate special-purpose subtree-based signature mechanisms that are difficult to construct in practice. To address this issue, this work provides an efficient, provably secure, lightweight subtree-based online/offline signature procedure (SBOOSP) and its aggregation (Agg-SBOOSP) for massive devices in 5G WSNs using conformable chaotic maps. The SBOOSP enables multi-time offline storage access while reducing processing time, so the signer can utilize the pre-stored offline information in polynomial time. This feature distinguishes the presented SBOOSP from previous online/offline signing procedures that allow only one signature. Furthermore, the new procedure requires a secret key during the pre-registration process, but no secret key is necessary during the offline stage. The suggested SBOOSP is provably unforgeable under chosen-message attack in the random oracle model. Additionally, SBOOSP and Agg-SBOOSP have the lowest computational costs compared with other contending schemes. Overall, the suggested SBOOSP outperforms several existing security schemes in terms of performance and computational overhead.
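The online/offline split can be illustrated with a toy Schnorr-style signature, where the expensive modular exponentiation is precomputed offline and the online phase costs only one hash and some modular arithmetic. This is a generic stand-in, not the paper's chaotic-map construction, and the parameters are toy-sized and cryptographically insecure:

```python
import hashlib
import secrets

# Toy group: q divides p - 1, and g = 4 has prime order q modulo p.
p, q, g = 467, 233, 4

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)          # (secret key, public key)

def offline_phase():
    # Precompute and store (k, r) before the message is known; a batch of
    # such pairs supports multi-time offline storage as in the abstract.
    k = secrets.randbelow(q - 1) + 1
    return k, pow(g, k, p)

def online_sign(x, msg, k, r):
    # Online phase: one hash plus modular add/multiply -- no exponentiation.
    e = int.from_bytes(hashlib.sha256(msg + r.to_bytes(2, "big")).digest(), "big") % q
    s = (k + x * e) % q
    return r, s

def verify(y, msg, r, s):
    e = int.from_bytes(hashlib.sha256(msg + r.to_bytes(2, "big")).digest(), "big") % q
    return pow(g, s, p) == (r * pow(y, e, p)) % p

x, y = keygen()
k, r = offline_phase()              # done ahead of time
sig = online_sign(x, b"sensor reading", k, r)
print(verify(y, b"sensor reading", *sig))  # True
```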
108.
It often happens that designers must integrate different instruction-set processors on a single chip. Typical applications are wireless, image processing, xDSL, network, and game processors. This paper deals with the three main problems that make the design of application-specific heterogeneous multiprocessor Systems-on-Chip very hard and expensive:
- higher-level specification;
- design of software support packages;
- design of on-chip HW/SW communication.
109.
Multiuser detection-oriented CDMA systems are expected to significantly improve system capacity in third-generation W-CDMA-based systems. However, they are greatly limited by the computational complexity of multiuser receivers. In this work, we propose a new, computationally efficient approach to multiuser detection (MUD), called selective multiuser detection (SMD), which applies MUD to a preselected subset of users and conventional detection to the remaining users. It fully exploits the processing power available at the receiver and provides a remedy for the computational complexity of MUD techniques when the number of active users grows beyond the processing capability. We propose and examine three criteria for selecting the users to be processed by the multiuser receiver, and analyze the capacity of single-cell and multicell CDMA cellular systems. The capacity improvement over the conventional CDMA detector combines the gain from MUD with the reduction of other-cell interference. We apply the analysis to two SMD schemes using a decorrelator and a successive interference canceller (SIC) as the multiuser receiver. The results indicate that SMD is a promising alternative for MUD-oriented CDMA systems with large numbers of active users.
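The preselection step, the heart of SMD, can be sketched as ranking users by a criterion and handing only the top ones to the multiuser receiver. Ranking by received power is one plausible criterion among the three the paper examines (the exact criteria and power values below are assumptions):

```python
def select_users(powers, capacity):
    # Rank users by received power (one hypothetical selection criterion)
    # and split them: the strongest go to the multiuser detector, the rest
    # fall back to the conventional matched-filter detector.
    ranked = sorted(range(len(powers)), key=lambda u: -powers[u])
    return set(ranked[:capacity]), set(ranked[capacity:])

# Hypothetical received powers for six active users:
powers = [3.1, 0.4, 2.7, 1.9, 0.8, 2.2]
mud_set, conventional_set = select_users(powers, capacity=3)
print(sorted(mud_set))  # users routed to the multiuser receiver
```

The `capacity` parameter models the receiver's processing budget: however many active users there are, the multiuser detector is never asked to handle more than it can afford.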
110.

No-reference image quality assessment (NR-IQA) has gained considerable importance in the last decade due to the rise of multimedia content in our daily lives. Owing to technological limitations, multiple distortions may be introduced into the images that need to be assessed. Feature selection has recently shown promising results for singly distorted NR-IQA, but its effectiveness on multiply distorted images still needs to be addressed. This paper presents the impact of feature-level fusion and feature selection on multiply distorted image quality assessment. Features are extracted from multiply distorted images using six NR-IQA techniques (BLIINDS-II, BRISQUE, CurveletQA, DIIVINE, GM-LOG, SSEQ) that operate in different domains (discrete cosine transform, spatial, curvelet transform, wavelet transform, spatial and gradient, spatial and spectral). The extracted features from the different domains are fused into a single feature vector, and all combinations of feature-level fusion across the six techniques are evaluated. Three feature selection algorithms (genetic search, linear forward search, particle swarm optimization) are then applied to select the optimal features for NR-IQA, and the selected features are used by a support vector regression model to predict the quality score. The proposed methodology is evaluated on two multiply distorted IQA databases (the LIVE multiply distorted image dataset (LIVEMD) and the multiply distorted image database (MDID2017)), two singly, synthetically distorted IQA databases (the Tampere image database (TID2013) and the computational and subjective image quality database (CSIQ)), and one screen-content IQA database (SIQAD).
Experimental results show that fusing features from different domains gives better performance than existing multiple-distortion NR-IQA techniques, with SROCC scores of 0.9555, 0.9587, 0.6892, 0.9452, and 0.7682 on the LIVEMD, MDID2017, TID2013, CSIQ, and SIQAD databases, respectively. Performance improves further when the genetic-search feature selection algorithm is applied to the fused features to remove redundant and irrelevant ones: the SROCC scores rise to 0.9691, 0.9723, and 0.6897 for the LIVEMD, MDID2017, and TID2013 databases, respectively.
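The two mechanics described above, feature-level fusion (concatenating the vectors from the individual NR-IQA techniques) and feature selection (keeping only a subset of the fused vector before regression), can be sketched as follows. The per-technique feature values and the selection mask are hypothetical; a real pipeline would learn the mask via genetic search and feed the result to an SVR model:

```python
def fuse(feature_sets):
    # Feature-level fusion: concatenate the vectors produced by the
    # individual NR-IQA techniques into one vector per image.
    return [v for fs in feature_sets for v in fs]

def apply_mask(vector, mask):
    # A feature-selection algorithm (genetic search in the paper) yields a
    # boolean mask over the fused vector; only the selected features are
    # passed on to the quality-regression model.
    return [v for v, keep in zip(vector, mask) if keep]

# Hypothetical per-technique features for one image:
brisque = [0.1, 0.5]
gm_log = [1.2, 0.3, 0.9]
sseq = [0.7]

fused = fuse([brisque, gm_log, sseq])
mask = [1, 0, 1, 1, 0, 1]          # e.g. the best chromosome found by search
print(apply_mask(fused, mask))     # [0.1, 1.2, 0.3, 0.7]
```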